This work addresses the problem of generating 3D holistic body motions from human speech. Given a speech recording, we synthesize sequences of 3D body poses, hand gestures, and facial expressions that are realistic and diverse. To achieve this, we first build a high-quality dataset of 3D holistic body meshes with synchronous speech. We then define a novel speech-to-motion generation framework in which the face, body, and hands are modeled separately. The separated modeling stems from the fact that face articulation strongly correlates with human speech, while body poses and hand gestures are less correlated. Specifically, we employ an autoencoder for face motions, and a compositional vector-quantized variational autoencoder (VQ-VAE) for the body and hand motions. The compositional VQ-VAE is key to generating diverse results. Additionally, we propose a cross-conditional autoregressive model that generates body poses and hand gestures, leading to coherent and realistic motions. Extensive experiments and user studies demonstrate that our proposed approach achieves state-of-the-art performance both qualitatively and quantitatively. Our novel dataset and code will be released for research purposes at https://talkshow.is.tue.mpg.de.
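The discrete bottleneck behind the VQ-VAE mentioned above can be illustrated with a minimal nearest-neighbor lookup; the toy 2-D codebook and function names below are invented for illustration and are far simpler than the paper's compositional, learned codebooks:

```python
import math

def quantize(latent, codebook):
    """Snap a continuous encoder output to its nearest codebook entry.

    This lookup is the core of a VQ-VAE: motion is represented as a
    sequence of discrete code indices, and sampling different indices
    yields diverse decoded motions.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    idx = min(range(len(codebook)), key=lambda i: dist(latent, codebook[i]))
    return idx, codebook[idx]

# A *compositional* VQ-VAE (as in the abstract) would keep separate
# codebooks, e.g. one for body poses and one for hand gestures, and
# quantize each part of the latent independently.
body_codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
idx, code = quantize([0.9, 1.2], body_codebook)
```

In the full model, an autoregressive prior over such code indices, cross-conditioned between body and hands, is what produces coherent holistic motion.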
Real and fake news in various domains, such as politics, health, and entertainment, spreads daily through online social media, calling for fake news detection across multiple domains. Among these, fake news in specific domains such as politics and health has more serious potential negative real-world impact (e.g., an epidemic fueled by COVID-19 misinformation). Previous studies focus on multi-domain fake news detection, equally mining and modeling the correlations among domains. However, these multi-domain methods suffer from a seesaw problem: the performance of some domains often improves at the cost of hurting the performance of others, which can lead to unsatisfactory performance in specific domains. To address this issue, we propose a Domain- and Instance-level Transfer Framework for Fake News Detection (DITFEND), which improves the performance on a specific target domain. To transfer coarse-grained domain-level knowledge, we train a general model on data from all domains from a meta-learning perspective. To transfer fine-grained instance-level knowledge and adapt the general model to the target domain, we train a language model on the target domain to evaluate the transferability of each data instance in the source domains and reweight each instance's contribution. Offline experiments on two datasets demonstrate the effectiveness of DITFEND. Online experiments show that DITFEND brings further improvements over base models in a real-world setting.
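The instance-level transfer step can be sketched as a per-instance reweighting; the softmax normalization and temperature below are illustrative assumptions, since the abstract does not spell out the exact weighting rule:

```python
import math

def instance_weights(transferability, temperature=1.0):
    """Softmax over per-instance transferability scores.

    In DITFEND's setting, a language model trained on the target domain
    scores how target-like each source instance is; source instances
    that resemble the target domain get larger training weights.
    """
    exps = [math.exp(t / temperature) for t in transferability]
    z = sum(exps)
    return [e / z for e in exps]

def reweighted_loss(per_instance_losses, weights):
    # Target-domain adaptation: weight each source instance's loss
    # by its transferability.
    return sum(w * l for w, l in zip(weights, per_instance_losses))

w = instance_weights([2.0, 0.0, -2.0])
```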
A major challenge of sequence representation learning is capturing long-range temporal dependencies. Typical methods for supervised sequence representation learning are built on recurrent neural networks to capture temporal dependencies. A potential limitation of these methods is that they only model first-order information interactions explicitly between adjacent time steps in a sequence, and thus the high-order interactions between non-adjacent time steps are not fully exploited. This greatly limits the ability to model long-range temporal dependencies, since the temporal features learned by first-order interactions cannot be maintained for a long time due to temporal information dilution and vanishing gradients. To tackle this limitation, we propose the Non-local Recurrent Neural Memory (NRNM) for supervised sequence representation learning, which performs non-local operations via a self-attention mechanism to learn full-order interactions within a sliding temporal memory block, and models global interactions between memory blocks in a gated recurrent manner. As a result, our model is able to capture long-range dependencies. Moreover, our model can distill the latent high-level features contained in high-order interactions. We validate the effectiveness and generalization of NRNM on three sequence applications across different modalities: sequence classification, step-wise sequential prediction, and sequence similarity learning. Our model compares favorably against other state-of-the-art methods specifically designed for each of these sequence applications.
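A minimal version of the block-wise non-local operation can be written in plain Python; the single-head attention and the fixed window parameters below are simplifications of NRNM's gated memory, invented for illustration:

```python
import math

def self_attention(block):
    """Toy single-head self-attention over one memory block.

    Every time step attends to every other step in the block, so
    interactions between non-adjacent steps are modeled directly,
    not only first-order neighbor interactions as in a vanilla RNN.
    """
    d = len(block[0])
    out = []
    for q in block:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in block]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        attn = [e / z for e in exps]
        out.append([sum(a * v[j] for a, v in zip(attn, block))
                    for j in range(d)])
    return out

def sliding_blocks(seq, size, stride):
    # NRNM-style sliding window: memory is refreshed block by block.
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, stride)]

out = self_attention([[1.0, 0.0], [1.0, 0.0]])
blocks = sliding_blocks(list(range(6)), size=3, stride=2)
```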
This work aims to advance temporal action detection (TAD) using an encoder-decoder framework with action queries (similar to DETR), which has shown great success in object detection. However, this framework encounters several problems when directly applied to TAD: insufficient exploration of the relations between queries in the decoder, inadequate classification training due to the limited number of training samples, and unreliable classification scores at inference. To this end, we first propose a relational attention mechanism in the decoder, which guides the attention among queries according to their relations. Moreover, we propose two losses to facilitate and stabilize the training of action classification. Finally, we propose to predict the localization quality of each action query at inference in order to distinguish high-quality queries. The proposed method, named ReAct, achieves state-of-the-art performance on THUMOS14 with a much lower computational cost than previous methods. Besides, extensive ablation studies are conducted to verify the effectiveness of each proposed component. The code is available at https://github.com/sssste/reaeact.
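The inference-time use of the predicted localization quality can be sketched as a rescoring step; the multiplicative fusion below is an assumption, as the abstract only states that the quality estimate is used to distinguish high-quality queries:

```python
def rank_queries(cls_scores, loc_quality):
    """Rescore action queries by predicted localization quality.

    Classification scores alone are unreliable at inference, so each
    query's score is fused with its predicted quality before ranking.
    """
    fused = [c * q for c, q in zip(cls_scores, loc_quality)]
    order = sorted(range(len(fused)), key=lambda i: -fused[i])
    return order, fused

# A confidently classified but poorly localized query (0.9, 0.2) is
# ranked below a well-localized one (0.8, 0.9).
order, fused = rank_queries([0.9, 0.8], [0.2, 0.9])
```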
The wide spread of fake news increasingly threatens both individuals and society. Great efforts have been made on automatic fake news detection in a single domain (e.g., politics). However, correlations commonly exist across multiple news domains, so it is promising to detect fake news in multiple domains simultaneously. Based on our analysis, we identify two challenges in multi-domain fake news detection: 1) domain shift, caused by the discrepancies among domains in words, emotions, styles, etc.; 2) domain labeling incompleteness, as real-world categorization assigns only a single domain label to each news article regardless of its topical diversity. In this paper, we propose a Memory-guided Multi-view Multi-domain Fake News Detection Framework (M³FEND) to address these two challenges. We model news pieces from a multi-view perspective, including semantics, emotion, and style. Specifically, we propose a Domain Memory Bank to enrich domain information, which can discover potential domain labels based on seen news pieces and model domain characteristics. Then, with enriched domain information as input, a Domain Adapter adaptively aggregates discriminative information from the multiple views of news in various domains. Extensive offline experiments on English and Chinese datasets demonstrate the effectiveness of M³FEND, and online tests verify its superiority in practice. Our code is available at https://github.com/ictmcg/m3fend.
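The Domain Memory Bank's discovery of potential domain labels can be sketched as a similarity lookup against stored per-domain prototypes; the dot-product similarity, the softmax, and the toy vectors below are illustrative assumptions rather than the paper's exact formulation:

```python
import math

def soft_domain_labels(news_vec, memory):
    """Return soft domain weights for a news representation.

    memory maps each seen domain to a prototype vector; a news piece
    touching several topics receives mass on several domains, easing
    the single-label incompleteness problem described in the abstract.
    """
    sims = {d: sum(a * b for a, b in zip(news_vec, p))
            for d, p in memory.items()}
    m = max(sims.values())
    exps = {d: math.exp(s - m) for d, s in sims.items()}
    z = sum(exps.values())
    return {d: e / z for d, e in exps.items()}

memory = {"politics": [1.0, 0.0], "health": [0.0, 1.0]}
weights = soft_domain_labels([0.9, 0.1], memory)
```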
Fake news spreads widely on social media in various domains, leading to real-world threats in many aspects such as politics, disasters, and finance. Most existing approaches focus on single-domain fake news detection (SFND), which yields unsatisfactory performance when applied to multi-domain fake news detection. As an emerging field, multi-domain fake news detection (MFND) is increasingly attracting attention. However, data distributions, such as word frequencies and propagation patterns, vary from domain to domain, i.e., domain shift. Facing the challenge of serious domain shift, existing fake news detection techniques perform poorly in multi-domain scenarios. Therefore, it is demanding to design a specialized model for MFND. In this paper, we first design a benchmark fake news dataset with domain labels for MFND, namely Weibo21, which consists of 4,488 fake news pieces and 4,640 real news pieces from 9 different domains. We further propose a Multi-domain Fake News Detection Model (MDFEND) that aggregates multiple representations extracted by a mixture of experts via a domain gate. Experiments show that MDFEND can significantly improve the performance of multi-domain fake news detection. Our dataset and code are available at https://github.com/kennqiang/mdfend-weibo21.
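The core aggregation in MDFEND, expert representations combined through a domain gate, reduces to a gated weighted sum; the fixed expert outputs and gate weights below are invented for illustration (in the model they come from learned experts and a softmax over the domain embedding):

```python
def domain_gate_aggregate(expert_outputs, gate_weights):
    """Mixture-of-experts aggregation with a domain gate.

    expert_outputs: one feature vector per expert
    gate_weights:   softmax weights derived from the domain embedding
                    (given directly here for simplicity)
    """
    dim = len(expert_outputs[0])
    return [sum(w * e[j] for w, e in zip(gate_weights, expert_outputs))
            for j in range(dim)]

# The gate lets each domain emphasize the experts that suit it.
features = domain_gate_aggregate([[1.0, 0.0], [0.0, 1.0]], [0.25, 0.75])
```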
Understanding and forecasting agents' future trajectories is critical for behavior analysis, robot navigation, autonomous vehicles, and other related applications. Previous methods mostly treat trajectory prediction as time-series generation. Different from them, this work studies agents' trajectories in a "vertical" view, i.e., modeling and forecasting trajectories from the spectral domain. Different frequency bands in a trajectory spectrum can hierarchically reflect agents' motion preferences at different scales. The low-frequency and high-frequency portions represent their coarse motion trends and fine motion variations, respectively. Accordingly, we propose a hierarchical network, V²-Net, which contains two sub-networks to hierarchically model and predict agents' trajectories with trajectory spectrums. The coarse-level keypoints estimation sub-network first predicts the "minimal" spectrums of agents' trajectories on several "key" frequency portions. Then the fine-level spectrum interpolation sub-network interpolates these spectrums to reconstruct the final predictions. Experimental results show the competitiveness and superiority of V²-Net on both the ETH-UCY benchmark and the Stanford Drone Dataset.
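The "vertical" spectral view can be made concrete with a discrete Fourier transform of one coordinate series; the plain-Python DFT below is a toy stand-in for the learned keypoint spectrums, and the keep-lowest-frequencies rule is an illustrative assumption:

```python
import cmath

def dft(signal):
    """Naive DFT of a real-valued 1-D coordinate series."""
    n = len(signal)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(signal)) for k in range(n)]

def idft(spec):
    """Inverse DFT, returning the real part."""
    n = len(spec)
    return [sum(c * cmath.exp(2j * cmath.pi * k * t / n)
                for k, c in enumerate(spec)).real / n for t in range(n)]

def low_pass(traj, keep):
    # Keep only the `keep` lowest-frequency components (plus their
    # conjugate mirror): the coarse motion trend, analogous to what
    # V^2-Net's coarse-level sub-network predicts first.
    spec = dft(traj)
    n = len(spec)
    kept = [c if (k < keep or k > n - keep) else 0j
            for k, c in enumerate(spec)]
    return idft(kept)

# A constant trajectory survives a 1-band low-pass unchanged.
smooth = low_pass([1.0, 1.0, 1.0, 1.0], keep=1)
```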
Transformers have shown preferable performance on many vision tasks. However, for person re-identification (ReID), vanilla transformers leave the rich high-order feature relations underexplored and capture insufficient local feature detail, owing to the dramatic appearance variations of pedestrians. In this work, we propose an Omni-Relational High-Order Transformer (OH-Former) to model omni-relational features for ReID. First, to strengthen the capacity of the visual representation, instead of obtaining the attention matrix from pairs of queries and isolated keys at each spatial position, we take a further step to model the high-order statistics of the non-local mechanism. We share the attention of the corresponding layer of each order with a prior mixing mechanism to reduce the computational cost. Then, a convolution-based local relation perception module is proposed to extract local relations and 2D positional information. The experimental results of our model are superior and promising, showing state-of-the-art performance on the Market-1501, DukeMTMC, MSMT17, and Occluded-Duke datasets.
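First-order pooling averages features, whereas the high-order statistics discussed above capture pairwise structure; a covariance-style second-order pooling, a common stand-in and not the paper's exact attention formulation, illustrates the difference:

```python
def second_order_pool(features):
    """Second-order (covariance-like) statistics of local features.

    Pairwise products between feature dimensions capture richer
    structure than first-order (mean) pooling, which is the intuition
    behind modeling high-order relations for ReID.
    """
    d = len(features[0])
    n = len(features)
    return [[sum(f[i] * f[j] for f in features) / n for j in range(d)]
            for i in range(d)]

stats = second_order_pool([[1.0, 0.0], [0.0, 1.0]])
```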
In this paper, we introduce a new large-scale face dataset named VGGFace2. The dataset contains 3.31 million images of 9131 subjects, with an average of 362.6 images for each subject. Images are downloaded from Google Image Search and have large variations in pose, age, illumination, ethnicity and profession (e.g. actors, athletes, politicians). The dataset was collected with three goals in mind: (i) to have both a large number of identities and also a large number of images for each identity; (ii) to cover a large range of pose, age and ethnicity; and (iii) to minimise the label noise. We describe how the dataset was collected, in particular the automated and manual filtering stages to ensure a high accuracy for the images of each identity. To assess face recognition performance using the new dataset, we train ResNet-50 (with and without Squeeze-and-Excitation blocks) Convolutional Neural Networks on VGGFace2, on MS-Celeb-1M, and on their union, and show that training on VGGFace2 leads to improved recognition performance over pose and age. Finally, using the models trained on these datasets, we demonstrate state-of-the-art performance on the IJB face recognition datasets, exceeding the previous state-of-the-art by a large margin. The dataset and models are publicly available.
Decompilation aims to transform a low-level programming language (LPL) (e.g., a binary file) into its functionally-equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
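The idea of the Optimized Translation Unit, splitting a function into fragments small enough to translate reliably, can be sketched as below; cutting at branch/return mnemonics and the length cap are assumptions, since the abstract does not define the exact splitting rule:

```python
def split_units(instructions, max_len=3):
    """Split an instruction list into small translation units.

    Each unit ends at a control-flow instruction (branch/return) or
    when it reaches max_len, so the downstream neural translator only
    ever sees short, self-contained fragments.
    """
    units, cur = [], []
    for ins in instructions:
        cur.append(ins)
        if ins.startswith(("jmp", "je", "ret")) or len(cur) == max_len:
            units.append(cur)
            cur = []
    if cur:
        units.append(cur)
    return units

units = split_units(["mov", "add", "je L1", "mov", "ret"])
```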